9 September 2020 (updated 27 October 2022, 2:44pm)

What facial recognition technology means

Proponents of the surveillance technology often liken it to CCTV, but it is much more invasive.

By Sanjana Varghese

In 2017 Ed Bridges, a former Liberal Democrat councillor in Gabalfa in Cardiff, was captured on camera by the South Wales Police while he was shopping. A few months later, during a protest against the arms trade, his image was scanned again. In the world’s first legal challenge against the use of facial recognition (FR) by police, Bridges took the South Wales Police to court, claiming that his privacy had been infringed. 

In August, the Court of Appeal found that in taking Bridges’ image, the South Wales Police had violated data protection laws, and had failed to consider the discriminatory impact of facial recognition technology. The ruling identified “fundamental deficiencies” in the legal framework around FR; there was no clear guidance on how it could be used by the police.

The judgment – the first of its kind – means that the South Wales Police has halted its trial of the surveillance technology. But the force insists that FR is essential to public protection.

Facial recognition creates a map of the face using certain landmarks – the slant of a nose, the valley of a dimple – which are then stored and compared with images on existing watchlists or live imagery.
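As a rough illustration of that comparison step, the sketch below (in Python, with entirely hypothetical names, vectors and threshold, not drawn from any deployed police system) treats each face as a fixed-length embedding vector and flags a watchlist entry when the distance between vectors falls below a tuned cut-off:

```python
import numpy as np

# Hypothetical 128-dimensional "faceprints": each enrolled face has been
# reduced to a fixed-length vector of landmark-derived measurements.
rng = np.random.default_rng(seed=0)
watchlist = {
    "entry_a": rng.normal(size=128),
    "entry_b": rng.normal(size=128),
}

def match_against_watchlist(probe, watchlist, threshold=0.6):
    """Return the watchlist names whose Euclidean distance to the
    probe embedding falls below the (hypothetical) match threshold."""
    return [
        name
        for name, stored in watchlist.items()
        if np.linalg.norm(probe - stored) < threshold
    ]

# A frame from a live camera would be embedded the same way; here a
# stored vector is perturbed slightly to stand in for a fresh image
# of the same person.
probe = watchlist["entry_a"] + rng.normal(scale=0.01, size=128)
print(match_against_watchlist(probe, watchlist))  # ['entry_a']
```

Where that cut-off sits is a design choice: a looser threshold produces more matches but also more false ones, which is partly why accuracy figures such as those cited below depend on how a given system is tuned.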

Between 2017 and 2020, the South Wales Police scanned more than 500,000 faces (though some of these may have been the same individuals). In London, the Metropolitan Police began testing FR in 2016 at the Notting Hill Carnival, and implemented live facial recognition at the start of 2020.


Professor Pete Fussey, a sociologist at the University of Essex who conducted an independent review of the Met’s facial recognition trials, found that the technology’s matches were accurate in just 19 per cent of the cases surveyed.

Critics of FR in both the UK and US have already seen their warnings about misidentification and indiscriminate use come to pass. In January, a Black man in Detroit was wrongly identified by an FR system as the suspect in a robbery and was jailed for a night (despite having an alibi). In New York, the police department used FR to identify and arrest a Black Lives Matter activist weeks after the protests against police brutality had taken place.

In the UK, there is no legislation authorising the use of facial recognition systems, whether by public or private actors. The Home Office would be the appropriate body to propose a legal framework, but it has yet to confirm that it will do so. In the meantime, other forms of biometric surveillance – such as temperature scanning and “emotion recognition” – are proliferating in private buildings, at airports and on public transport, without legal oversight.

Critics have also highlighted racial biases in the use of FR. In the US in 2019, the National Institute of Standards and Technology tested 189 FR algorithms from 99 developers and found that members of some groups, such as Asian-American men, were misidentified up to 100 times more frequently than their white counterparts. An earlier MIT study found that commercially released facial-analysis programs misclassified darker-skinned women at error rates of up to 34 per cent.

Proponents of the technology often liken it to CCTV, because it records people’s movements in real time. But FR is much more invasive because it extracts biometric data from individuals.

Opposing the unchecked spread of FR is not only a question of individual rights, or of the right to privacy. Given the well-known problems of algorithmic bias, researchers and technologists have argued that any legislation on the use of new technology must be proactive and protect against future harm; it is not enough to legislate for a moment that has already happened, such as the Bridges case.

Other scholars and activists, however, such as Simone Browne, author of Dark Matters: On the Surveillance of Blackness (2015), and Joy Buolamwini, founder of the research organisation the Algorithmic Justice League, have long argued that facial recognition reproduces social inequalities.

They maintain that the racial biases of FR are not just design flaws, but are built into the foundations of these technologies – through the placement of cameras, the choice of who to film, and the pre-existing data that algorithms are trained and developed with. Questions around facial recognition are similar to those concerning CCTV in that both reify imbalances of power under the guise of safety. But that’s where the similarities end. 

This problem applies to more than facial recognition. As the A-level exam results debacle shows, algorithmic injustice is no longer a niche issue, but has become a defining problem of our political and social orders.

In the US, during protests over police brutality, companies such as IBM, Microsoft and Amazon promised that they would withhold facial recognition software from police forces because of concerns over racial profiling.

But that won’t stop police running footage found on social media through facial recognition systems, for example. Nor does it prevent private firms from mining platforms such as Facebook and LinkedIn for people’s faces and selling the resulting recognition tools to corporations and government agencies, as the New York firm Clearview AI reportedly does (the US Immigration and Customs Enforcement agency has purchased facial recognition software from Clearview).

Yet the tide might be turning. Several US cities, such as Boston and San Francisco, have passed moratoriums on the use of facial recognition systems following concerted efforts from activists, civil rights groups and technologists. In apartment buildings, tenants are resisting the use of FR scanners at their front doors, and students are doing the same in their classrooms. The spread of surveillance technologies – in our neighbourhoods and office buildings – may not be inevitable.
